Exploring concepts that engage with time, documentation, and mechanical systems:
A mechanical or electromechanical flip clock that physically displays time through rotating cards or panels.
Technical Considerations:
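A minimal control sketch in Python, assuming a forward-only split-flap mechanism; advance_flap() is a hypothetical stand-in for the actual motor driver, and the loop only rotates a module when its displayed digit changes.

```python
# Sketch of flip-clock control logic (hypothetical hardware interface).
import time
from datetime import datetime

def advance_flap(unit: str, steps: int) -> None:
    """Placeholder for the motor driver; replace with real stepper/solenoid control."""
    print(f"advance {unit} by {steps} flap(s)")

def run_clock(poll_seconds: float = 1.0) -> None:
    shown = {"hour": None, "minute": None}   # digits currently on the flaps
    while True:
        now = datetime.now()
        for unit, value, modulus in (("hour", now.hour, 24), ("minute", now.minute, 60)):
            if shown[unit] is None:
                shown[unit] = value                      # assume flaps start in sync
            elif value != shown[unit]:
                # flaps only rotate forward, so wrap around the dial if needed
                advance_flap(unit, (value - shown[unit]) % modulus)
                shown[unit] = value
        time.sleep(poll_seconds)

if __name__ == "__main__":
    run_clock()
```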
An object embedded with a small camera that autonomously captures a single photograph at an unpredictable moment each day. The device removes human agency from the act of photography, creating an archive of images taken without intention or awareness.
Technical Considerations:
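A minimal sketch of the capture scheduler, assuming the object houses a small single-board computer; capture_photo() and the archive directory are placeholders. Each day the device draws one random second from the following twenty-four hours, sleeps until it arrives, and fires once.

```python
# Sketch of the one-unpredictable-photo-per-day scheduler (hypothetical camera call).
import random
import time
from datetime import datetime, timedelta
from pathlib import Path

ARCHIVE = Path("archive")   # hypothetical output directory

def capture_photo(path: Path) -> None:
    """Placeholder for the real camera trigger."""
    path.write_text(f"captured at {datetime.now().isoformat()}\n")

def seconds_until(target: datetime) -> float:
    return max((target - datetime.now()).total_seconds(), 0)

def run() -> None:
    ARCHIVE.mkdir(exist_ok=True)
    while True:
        midnight = datetime.now().replace(hour=0, minute=0, second=0, microsecond=0)
        tomorrow = midnight + timedelta(days=1)
        moment = tomorrow + timedelta(seconds=random.randint(0, 86399))  # unpredictable moment
        time.sleep(seconds_until(moment))
        capture_photo(ARCHIVE / f"{moment:%Y%m%d}.txt")

if __name__ == "__main__":
    run()
```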
Photographs are created from mined materials—silver, platinum, gold, and more. Every camera is constructed from extracted resources: rare earth minerals for sensors and screens, lithium for batteries, and silicon for chips. Photography is therefore tied to mining not only as metaphor but as material fact. We often speak of "capturing" images, but the apparatus itself is an extraction: materials violently removed from geological time and assembled into devices that claim to freeze moments.
Video does not capture movement; it fabricates the appearance of continuity from individual still frames. What we perceive as flow is actually fragmentation: separate extractions played in sequence, creating the illusion of time passing. The screen convinces us that something is moving when, in reality, nothing is in motion at all.
By carving stone and embedding a screen that plays looping video, this project literalizes the double extraction of photography. The stone is mined material, the screen is mined material, and the video represents extracted time. The hollowed cavity mimics the carved earth that provided the minerals found in the camera. The device becomes inseparable from its own material violence—you cannot view the moving image without confronting the extracted stone that frames it.
Stone holds millions of years of history. The screen displays 24 frames per second. One measures time in epochs; the other in intervals too brief for human perception. By housing manufactured motion within geological permanence, the project exposes the tension between photography's two temporalities: deep time (represented by the extracted minerals) and instantaneous time (symbolized by the extracted moment). Both forms of extraction create illusions.
Jorge Luis Borges' "On Exactitude in Science" (1946) describes an empire where cartographers create a map at 1:1 scale—identical in size to the territory itself. This impossible map "coincided point for point" with the empire, rendering it useless. Future generations abandoned it "to the Inclemencies of Sun and Winters," leaving only "Tattered Ruins" in the desert.
"If the map of my room is bigger than my room, then I have to leave my room to learn where something in my room is."
This 1:1 scale represents the corporeal limit—the point where representation collapses into the thing itself. Lewis Carroll explored this in "Sylvie and Bruno Concluded" (1893), proposing a map with "the scale of a mile to the mile" that "would cover the whole country, and shut out the sunlight," leading to the conclusion that "we now use the country itself, as its own map."
Jean Baudrillard used Borges' parable in "Simulacra and Simulation" (1981) to theorize hyperreality—how simulations become more real than reality itself. He identified four stages where representation progressively detaches from the real, culminating in "pure simulacrum, in which the simulacrum has no relationship to any reality whatsoever."
The stone viewing device operates at this limit. The screen's dimensions are determined entirely by the stone's cavity—a 1:1 relationship between container and content. The image cannot exceed its physical boundary; the stone defines the frame absolutely.
Paragraphica (2023) by Bjørn Karmann is a context-to-image camera that uses location data and AI to visualize a "photo" without capturing light. Using GPS coordinates, weather, time of day, and nearby places, it composes a paragraph describing the location, then converts this text into an AI-generated image. Karmann writes: "The resulting 'photo' is not just a snapshot, but a visual data visualization and reflection of the location you are at, and perhaps how the AI model 'sees' that place."
Inspired by the star-nosed mole—which perceives the world through touch rather than sight—Paragraphica asks how we might empathize with non-human intelligence. As AI systems develop perception, understanding how they "see" becomes increasingly urgent yet nearly impossible to imagine.
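As a sketch of the context-to-image idea (not Karmann's implementation), Paragraphica's pipeline reduces to two stages: compose a paragraph from location data, then hand that paragraph to an image model. The Context fields and generate_image() below are assumptions standing in for the real data sources and model.

```python
# Sketch of a context-to-image pipeline; all inputs and the model call are placeholders.
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Context:
    place: str          # e.g. a reverse-geocoded address
    nearby: List[str]   # names of nearby places
    weather: str        # e.g. "overcast, light drizzle"
    when: datetime

def describe(ctx: Context) -> str:
    """Turn raw context into the paragraph that stands in for the photograph."""
    return (
        f"A photo taken at {ctx.when:%H:%M} near {ctx.place}, close to "
        f"{', '.join(ctx.nearby)}. The weather is {ctx.weather}."
    )

def generate_image(prompt: str) -> bytes:
    """Placeholder for a text-to-image model call."""
    raise NotImplementedError(prompt)

if __name__ == "__main__":
    ctx = Context(
        place="a quiet residential street",
        nearby=["a bakery", "a tram stop"],
        weather="overcast, light drizzle",
        when=datetime.now(),
    )
    print(describe(ctx))   # generate_image(describe(ctx)) would be the second stage
```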
CCamera 2 (2019) by Marco Land responds to the idea that everything has already been photographed. When you press the capture button, CCamera swaps your screen with a visually similar image from existing archives. Land explains: "Normally you would be called the author of a photo if you've captured it. CCamera 'tricks' you into shooting a photo because you're also pressing the capture button. But in reality it is not yours."
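The swap can be sketched as a nearest-neighbour lookup over perceptual hashes. This is an assumption about the mechanism rather than Land's code, and the archive index below is hypothetical.

```python
# Sketch: return the archived image most similar to the frame just "captured".
from typing import Dict

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit perceptual hashes."""
    return bin(a ^ b).count("1")

def nearest_archived(capture_hash: int, archive: Dict[str, int]) -> str:
    """Path of the archived image whose hash is closest to the capture's."""
    return min(archive, key=lambda path: hamming(capture_hash, archive[path]))

if __name__ == "__main__":
    archive = {                         # hypothetical precomputed hashes
        "archive/0001.jpg": 0xACB2ACB2ACB2ACB2,
        "archive/0002.jpg": 0xF0F0F0F0F0F0F0F0,
    }
    capture_hash = 0xACB2ACB2ACB3ACB2   # hash of the frame the user just shot
    print(nearest_archived(capture_hash, archive))   # -> archive/0001.jpg
```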
Samsung Galaxy S21 Moon Mode (2022) uses AI to "enhance" moon photographs beyond what the camera sensor captures. The system recognizes the moon as an object, then applies a detail enhancement engine trained on high-resolution moon images. Samsung states: "Scene Optimizer uses AI processing to confirm whether to apply the detail enhancement engine, takes multiple pictures and synthesizes them into a single bright image with reduced noise, then uses deep learning-based AI detail enhancement."
Critics argue the phone isn't photographing the moon—it's recognizing a moon-shaped object and overlaying stored moon details. Samsung responded: "Samsung is continuously improving Scene Optimizer to reduce any potential confusion that may occur between taking a picture of the real moon and an image of the moon."
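The disputed step is easiest to see as a branch. The sketch below paraphrases Samsung's published description rather than any actual Scene Optimizer code; all three functions are placeholders for proprietary components.

```python
# Sketch of the described pipeline: stack frames, then conditionally "enhance".
def classify_scene(frames):
    """Placeholder: returns a scene label such as 'moon'."""
    ...

def stack_frames(frames):
    """Placeholder: merges multiple exposures into one low-noise image."""
    ...

def enhance_detail(image):
    """Placeholder: the learned detail-enhancement engine critics object to."""
    ...

def process(frames):
    image = stack_frames(frames)            # "takes multiple pictures and synthesizes them"
    if classify_scene(frames) == "moon":    # "confirm whether to apply the detail enhancement engine"
        image = enhance_detail(image)       # adds detail the sensor never recorded
    return image
```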
Genie 3 (2025) by Google DeepMind generates interactive 3D environments from text prompts in real time at 24 fps. Users can navigate these worlds, which remain consistent over time and respond plausibly to physical interaction. The system introduces "promptable world events"—altering simulations with text commands like inserting a herd of deer into an existing scene. While Genie 3 doesn't understand physics, it makes accurate predictions by recognizing patterns in video footage, suggesting neural networks could eventually replace physics modeling.
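Abstracted to its interaction loop (and leaving the model itself aside), this description amounts to autoregressive frame generation conditioned on the user's action and an optional text event. Everything in the sketch is a placeholder, not DeepMind's API.

```python
# Sketch of an interactive world-model loop with a "promptable world event".
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class WorldState:
    frames: List[str] = field(default_factory=list)   # generated frame history

def next_frame(state: WorldState, action: str, event: Optional[str] = None) -> str:
    """Placeholder for the generative step (the model would run 24 of these per second)."""
    return f"frame {len(state.frames)}: action={action}, event={event}"

def run(steps: int = 3) -> WorldState:
    state = WorldState()
    for t in range(steps):
        event = "insert a herd of deer" if t == 2 else None   # promptable world event
        state.frames.append(next_frame(state, action="move_forward", event=event))
    return state

if __name__ == "__main__":
    print(*run().frames, sep="\n")
```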
These projects reveal photography's expanding boundaries—from light-based capture to data synthesis, from human authorship to algorithmic generation. They ask: what remains of photography when the camera no longer needs to see?
The moving panorama was a 19th-century storytelling device where a long painted scroll wound between two spools and cranked past a viewing window. As the landscape scrolled by, viewers experienced traveling through space—an analog precursor to film.
These devices created the illusion of movement through continuous horizontal scrolling, similar to watching scenery from a moving train. Popular subjects included scenic journeys, biblical stories, and adventure tales. The panorama presented a contradiction: the image was continuous and whole, but viewers could only access fragments at a time through the fixed frame.
The stone viewing device inverts this relationship. Instead of a continuous image moving past a fixed frame, a looping video plays within a permanent stone enclosure. The frame is not a window but a carved cavity—simultaneously revealing and containing the image. Where the panorama simulated travel through space, the stone device traps time within geological permanence.
Made Components
Sourced Components